Add PytorchQuantumModel for quantum circuit training with autograd #208
Conversation
@kinianlo Thanks for this, at first glance it looks really good! @nikhilkhatri will have a detailed look shortly, but in the meantime, it would be great if you could post a notebook that shows how it works in practice.
If you could also rebase on the current branch, that would be great.
Force-pushed from c426086 to e3d8058
Sure, here is an example of how the new model can be used:

```python
from lambeq.backend.grammar import Cup, Id, Word
from lambeq import AtomicType, IQPAnsatz, PytorchQuantumModel
import torch

torch.manual_seed(42)

N = AtomicType.NOUN
S = AtomicType.SENTENCE

ansatz = IQPAnsatz({N: 1, S: 1}, n_layers=1)
diagrams = [ansatz((Word("Alice", N) @ Word("runs", N >> S) >> Cup(N, N.r) @ Id(S))),
            ansatz((Word("Bob", N) @ Word("jumps", N >> S) >> Cup(N, N.r) @ Id(S)))]
labels = torch.tensor([0, 1])

model = PytorchQuantumModel.from_diagrams(diagrams)
model.initialise_weights()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    pred = model(diagrams)
    loss = loss_fn(pred, labels)
    loss.backward()
    optimizer.step()
    print(f'Epoch {epoch}, Loss: {loss.item()}')
```

which prints the loss at each epoch.
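As an aside, here is a hedged sketch of how one might read off predicted labels after the training loop above; it is plain PyTorch, not part of the PR, and it assumes (as the `CrossEntropyLoss` usage implies) that the model returns one row of logits per diagram:

```python
# Hedged sketch: inspect predictions after training.
# Reuses `model` and `diagrams` from the snippet above.
with torch.no_grad():
    logits = model(diagrams)               # shape (n_diagrams, 2), per the loss above
    probs = torch.softmax(logits, dim=-1)  # normalise logits into probabilities
    predicted = probs.argmax(dim=-1)       # most likely class per diagram
print(predicted)
```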
Another example, using PytorchTrainer:

```python
from lambeq.backend.grammar import Cup, Id, Word
from lambeq import AtomicType, IQPAnsatz, PytorchQuantumModel
from lambeq import PytorchTrainer, Dataset
import torch

torch.manual_seed(42)

N = AtomicType.NOUN
S = AtomicType.SENTENCE

ansatz = IQPAnsatz({N: 1, S: 1}, n_layers=1)
diagrams = [ansatz((Word("Alice", N) @ Word("runs", N >> S) >> Cup(N, N.r) @ Id(S))),
            ansatz((Word("Bob", N) @ Word("jumps", N >> S) >> Cup(N, N.r) @ Id(S)))]
labels = [[1, 0], [0, 1]]

model = PytorchQuantumModel.from_diagrams(diagrams)
model.initialise_weights()

trainer = PytorchTrainer(model,
                         loss_function=torch.nn.CrossEntropyLoss(),
                         optimizer=torch.optim.Adam,
                         epochs=10,
                         learning_rate=1e-2)
dataset = Dataset(diagrams, labels)
trainer.fit(dataset)
```

which prints the training progress for each epoch.
Done!
The current implementation of … This approach works, but it is definitely not the best design, and it introduces all sorts of type-check errors. Perhaps @nikhilkhatri would have a better idea on how to structure the inheritance.
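For readers following the thread, a minimal sketch of the kind of clash being described, using hypothetical, simplified class names rather than the actual lambeq hierarchy:

```python
import numpy as np
import torch

class QuantumModelBase:
    # Stand-in for a quantum model base class whose weights
    # are annotated as a numpy array.
    weights: np.ndarray

class TorchQuantumModel(QuantumModelBase, torch.nn.Module):
    # Re-declaring `weights` as a torch Parameter contradicts the
    # inherited `np.ndarray` annotation: this runs fine, but a static
    # type checker flags the override as an incompatible assignment.
    weights: torch.nn.Parameter
```

The code executes, but a checker such as mypy rejects the re-declared attribute, which is roughly the flavour of error being discussed.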
Thanks a lot for this PR @kinianlo, it looks largely very good!
Thanks for the feedback @nikhilkhatri. I have added the test for mixed states and removed … However, the type problems still stand, mostly because … Of course, one way out would be to not inherit …
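To make the "not inherit" option concrete, here is a hedged sketch of a composition-based alternative; every name here (`TorchQuantumWrapper`, `evaluate`) is hypothetical and not the PR's API:

```python
import torch

class TorchQuantumWrapper(torch.nn.Module):
    """Hypothetical sketch: hold the quantum model as an attribute
    instead of inheriting from it, so the torch and numpy sides
    never share a `weights` annotation."""

    def __init__(self, quantum_model, n_params: int):
        super().__init__()
        self.quantum_model = quantum_model  # plain attribute, no MRO or typing clash
        self.params = torch.nn.Parameter(torch.rand(n_params))

    def forward(self, diagrams):
        # Delegate to the wrapped model, passing torch parameters so
        # autograd can track the computation end to end.
        return self.quantum_model.evaluate(diagrams, self.params)
```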
Hi @kinianlo, great work on this PR, thank you! Could you rebase your branch with the latest …?
Force-pushed from d69526a to 978e970
@neiljdo Thanks! I have rebased the branch.
@kinianlo sorry for the trouble, but can you rebase again with the latest …? EDIT: I've rebased and updated your branch.
Force-pushed from 978e970 to 67d98be
This is good work; we'll try to include it in the upcoming official release (some time within the next month)!
This PR implements a model, `PytorchQuantumModel`, that allows training quantum circuits with PyTorch gradient tracking. This also fixes issue #205.